12 research outputs found

    Threshold and Proactive Pseudo-Random Permutations

    We construct a reasonably efficient threshold and proactive pseudo-random permutation (PRP). Our protocol needs only O(1) communication rounds and tolerates up to (n-1)/2 dishonest servers out of n in the semi-honest setting. Many protocols that use PRPs (e.g., the CBC block-cipher mode) can now be translated into the distributed setting. Our main technique for constructing invertible threshold PRPs is a distributed Luby-Rackoff construction in which both the secret keys *and* the input are shared among the servers. We also present protocols for obliviously computing the Naor-Reingold and Dodis-Yampolskiy pseudo-random functions with shared input and keys.
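
    The core building block here, the Luby-Rackoff construction, turns a keyed pseudo-random function into an invertible pseudo-random permutation by running a few Feistel rounds; the paper's contribution is evaluating that structure when both the round keys and the input are secret-shared among servers. Below is a minimal single-party sketch of the Feistel structure only, not the threshold protocol; the HMAC-SHA256 round function, the block size, and all function names are illustrative assumptions.

    # Minimal single-party Luby-Rackoff (Feistel) sketch: builds an invertible
    # permutation on 2*N-byte blocks from a keyed PRF. The threshold protocol in
    # the paper evaluates this structure with keys AND input secret-shared.
    import hmac, hashlib

    N = 16  # half-block size in bytes (illustrative)

    def prf(key: bytes, x: bytes) -> bytes:
        """Stand-in round function: truncated HMAC-SHA256."""
        return hmac.new(key, x, hashlib.sha256).digest()[:N]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def feistel_encrypt(round_keys: list, block: bytes) -> bytes:
        """Four Feistel rounds give a strong PRP in the Luby-Rackoff analysis."""
        left, right = block[:N], block[N:]
        for k in round_keys:
            left, right = right, xor(left, prf(k, right))
        return left + right

    def feistel_decrypt(round_keys: list, block: bytes) -> bytes:
        """Inverts feistel_encrypt by undoing the rounds in reverse order."""
        left, right = block[:N], block[N:]
        for k in reversed(round_keys):
            left, right = xor(right, prf(k, left)), left
        return left + right

    if __name__ == "__main__":
        keys = [bytes([i]) * 32 for i in range(4)]
        msg = b"sixteen byte msg" * 2            # one 32-byte block
        ct = feistel_encrypt(keys, msg)
        assert feistel_decrypt(keys, ct) == msg

    In the distributed protocol the abstract describes, each local prf call would be replaced by a joint computation over shares of the key and of the half-block, which is what keeps both the keys and the input hidden from any coalition of at most (n-1)/2 servers.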

    Inoculation Strategies for Victims of Viruses and the Sum-of-Squares Partition Problem

    We propose a simple game for modeling containment of the spread of viruses in a graph of n nodes. Each node must choose either to install anti-virus software at some known cost C, or to risk infection and a loss L if a virus that starts at a random initial point in the graph can reach it without being stopped by some intermediate node. The goal of individual nodes is to minimize their individual expected cost. We prove many game-theoretic properties of the model, including an easily applied characterization of Nash equilibria, culminating in our showing that allowing selfish users to choose Nash equilibrium strategies is highly undesirable, because the price of anarchy is an unacceptable Θ(n) in the worst case. This shows in particular that a centralized solution can give a much better total cost than an equilibrium solution. Though it is NP-hard to compute such a social optimum, we show that the problem can be reduced to a previously unconsidered combinatorial problem that we call the sum-of-squares partition problem. Using a greedy algorithm based on sparse cuts, we show that this problem can be approximated to within a factor of O(log² n), giving the same approximation ratio for the inoculation game.
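
    The model has a clean computational form: fix the set of inoculated nodes and delete them from the graph; an unprotected node is infected exactly when the virus starts in its remaining connected component, so its expected cost is L times that component's size divided by n, and the social cost is C times the number of inoculated nodes plus (L/n) times the sum of squared component sizes, the quantity the sum-of-squares partition problem asks to minimize. A short sketch of that cost computation follows; the use of networkx, the example graph, and the parameter values are illustrative assumptions, not code from the paper.

    # Expected costs in the inoculation game: inoculated nodes pay the install
    # cost C; an unprotected node pays L times the probability that a virus
    # starting at a uniformly random node reaches it, i.e. L * (size of its
    # "attack component" after removing inoculated nodes) / n.
    import networkx as nx

    def expected_costs(G: nx.Graph, inoculated: set, C: float, L: float) -> dict:
        n = G.number_of_nodes()
        residual = G.subgraph(set(G.nodes) - inoculated)   # drop protected nodes
        costs = {v: C for v in inoculated}
        for component in nx.connected_components(residual):
            k = len(component)                             # attack component size
            for v in component:
                costs[v] = L * k / n                       # infection probability k/n
        return costs

    def social_cost(G, inoculated, C, L):
        return sum(expected_costs(G, inoculated, C, L).values())

    if __name__ == "__main__":
        G = nx.path_graph(7)                               # 0-1-2-3-4-5-6
        print(social_cost(G, inoculated={3}, C=1.0, L=5.0))  # cut at the middle node

    The second term of the social cost is exactly a sum of squares over the parts created by the inoculated "cut" nodes, which is why minimizing it reduces to the sum-of-squares partition problem that the paper's sparse-cut greedy algorithm approximates within O(log² n).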

    Towards a Theory of Data Entanglement

    We propose a formal model for data entanglement as used in storage systems like Dagster [25] and Tangler [26]. These systems split data into blocks in such a way that a single block becomes a part of several documents; these documents are said to be entangled. Dagster and Tangler use entanglement in conjunction with other techniques to deter a censor from tampering with unpopular data. In this paper, we assume that entanglement is a goal in itself. We measure the strength of a system by how thoroughly documents are entangled with one another and how attempting to remove a document affects the other documents in the system. We argue that while Dagster and Tangler achieve their stated goals, they do not achieve ours. In particular, we prove that deleting a typical document in Dagster affects, on average, only a constant number of other documents; in Tangler, it affects virtually no other documents. This motivates us to propose two stronger notions of entanglement, called dependency and all-or-nothing integrity. All-or-nothing integrity binds the users' data so that it is hard to delete or modify the data of any one user without damaging the data of all users. We study these notions in six submodels, differentiated by the choice of users' recovery algorithms and restrictions placed on the adversary. In each of these models, we not only provide mechanisms for limiting the damage done by the adversary, but also argue, under reasonable cryptographic assumptions, that no stronger mechanisms are possible.
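
    As a rough illustration of the block-level entanglement the abstract refers to, the sketch below stores each new block as the XOR of the plaintext with a few previously stored blocks, so older blocks become dependencies of newer documents and a single stored block ends up serving several of them. This is a simplified, Dagster-style scheme written for illustration only; the block size, the dependency count, and all names are assumptions and do not reproduce Dagster's or Tangler's actual constructions.

    # Simplified XOR entanglement sketch (illustrative only): each new block is
    # stored as the XOR of the plaintext with c previously stored blocks, so
    # deleting any of those dependencies makes the new document unrecoverable.
    import os, random

    BLOCK = 32                                              # block size in bytes
    pool = [os.urandom(BLOCK) for _ in range(4)]            # bootstrap server blocks

    def xor_blocks(*blocks: bytes) -> bytes:
        out = bytearray(BLOCK)
        for b in blocks:
            out = bytearray(x ^ y for x, y in zip(out, b))
        return bytes(out)

    def store(block: bytes, c: int = 2) -> dict:
        """Entangle one plaintext block with c existing pool blocks."""
        assert len(block) == BLOCK
        deps = random.sample(range(len(pool)), c)           # indices of older blocks
        pool.append(xor_blocks(block, *(pool[i] for i in deps)))
        return {"stored_index": len(pool) - 1, "deps": deps}

    def recover(meta: dict) -> bytes:
        """Recovery needs every dependency; losing one loses the document."""
        return xor_blocks(pool[meta["stored_index"]], *(pool[i] for i in meta["deps"]))

    if __name__ == "__main__":
        doc = b"entangled block of user data...."           # exactly 32 bytes
        meta = store(doc)
        assert recover(meta) == doc

    Under a scheme of this kind, deleting a typical stored block only damages the documents that happen to list it as a dependency, which is precisely the weakness, quantified in the paper for Dagster and Tangler, that motivates the stronger dependency and all-or-nothing integrity notions.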

    Artificial creativity augmentation

    Creativity has been associated with multifarious descriptions, whereby one common exemplary definition depicts creativity as the generation of ideas that are perceived as both novel and useful within a certain social context. In the face of adversarial conditions taking the form of global societal challenges, from climate change through AI risks to technological unemployment, this paper motivates future research on artificial creativity augmentation (ACA) to indirectly support the generation of requisite defense strategies and solutions. This novel term is deliberately ambiguous, since it subsumes two research directions: (1) artificially augmenting human creativity, but also (2) augmenting artificial creativity. In this paper, we examine and extend recent creativity research findings from psychology and cognitive neuroscience to identify potential indications of how to work towards (1). Moreover, we briefly analyze how research on (1) could possibly inform progress towards (2). Overall, while human enhancement as well as the implementation of powerful AI are often perceived as ethically controversial, future ACA research could even appear socially desirable.

    Error-Correction for AI Safety

    The complex socio-technological debate underlying safety-critical and ethically relevant issues in AI development and deployment extends across heterogeneous research subfields and involves partly conflicting positions. In this context, it seems expedient to generate a minimalistic joint transdisciplinary basis that disambiguates references to specific subtypes of AI properties and risks, enabling an error-correction in the transmission of ideas. In this paper, we introduce a high-level transdisciplinary system clustering that draws an ethical distinction between antithetical Type I and Type II systems, extending a cybersecurity-oriented AI safety taxonomy with considerations from psychology. Moreover, we review relevant Type I AI risks, reflect upon possible epistemological origins of hypothetical Type II AI from a cognitive sciences perspective, and discuss the related human moral perception. Strikingly, our nuanced transdisciplinary analysis yields the figurative formulation of the so-called AI safety paradox, identifying AI control and value alignment as conjugate requirements in AI safety. Against this backdrop, we craft versatile multidisciplinary recommendations with ethical dimensions tailored to Type II AI safety. Overall, we suggest proactive and, importantly, corrective methods, rather than prohibitive ones, as a common basis for both Type I and Type II AI safety.